8 - Musteranalyse/Pattern Analysis (formerly Mustererkennung 2) (PA) [ID:383]

Welcome to the Tuesday session. We have only 45 minutes today, unfortunately, and we will try to get through the chapter on norms; then we will learn about a neural network that is about 40 years old. It's older than me, but it's a very nice neural network because it's so simple, and it's built on an interesting idea that is still used in many applications. That's the program for today; there is no big picture, so we jump right in.

Yesterday we discussed different norms: the L1 norm, the L2 norm, the L infinity norm, and the L0 norm. The L0 norm is called a norm, but it is no norm. I looked it up: it actually fails the homogeneity criterion, yet the literature still calls it a norm. So I added a sentence to the slides saying that it is called a norm although, strictly speaking, it is not; even Wikipedia puts it like that. It's interesting, so thanks again for the pointer.
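To make the homogeneity failure concrete, here is a quick check (my own addition for clarity, not part of the lecture):

```latex
% Homogeneity requires \|\alpha x\| = |\alpha|\,\|x\| for all scalars \alpha.
% The L0 "norm" counts nonzero entries, so scaling leaves it unchanged:
\|\alpha x\|_0 = \|x\|_0 \neq |\alpha|\,\|x\|_0 \qquad (\alpha \neq 0,\ |\alpha| \neq 1,\ x \neq 0)
% Example: x = (1, 0, 2)^\top, \alpha = 3:
%   \|3x\|_0 = 2, but |3| \cdot \|x\|_0 = 6.
```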

For us, it's important to look at the unit balls, and you have to keep a few pictures in mind; they matter for the optimization chapter that we will cover a few lectures from now. The unit ball of the L infinity norm is the axis-aligned square. The one for the L2 norm you should also remember: it's very symmetric, smooth, a circle. Then we have the L1 norm, whose unit ball is the diamond. If you know these three, that's fine.
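For completeness, here are the standard definitions behind those three pictures (added here; the lecture shows them only as figures):

```latex
\|x\|_1 = \sum_i |x_i| \ \text{(diamond)}, \qquad
\|x\|_2 = \Big(\sum_i x_i^2\Big)^{1/2} \ \text{(circle)}, \qquad
\|x\|_\infty = \max_i |x_i| \ \text{(square)},
```

where each unit ball is the set $\{x : \|x\| \le 1\}$.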

We have also seen how we can use these various norms to do linear regression, or line fitting, in 2D. We have seen that it makes a big difference which norm you take, and there is a huge wave in research that considers different norms to solve very familiar problems in image processing; very recent conferences basically focus on this topic of using different norms to solve regression problems. It's incredible what's going on there, especially in the context of compressed sensing. So if you feel like reading more about that, just browse the web for compressed sensing and you will find tons of papers that are not older than three years and that basically do L1 norm optimization.
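For orientation (standard background, not spelled out in the lecture): the prototypical compressed sensing formulation replaces the combinatorial L0 objective with its convex L1 surrogate, which is what those papers solve:

```latex
\min_x \|x\|_0 \ \text{ s.t. } \ Ax = b
\qquad\leadsto\qquad
\min_x \|x\|_1 \ \text{ s.t. } \ Ax = b \quad \text{(basis pursuit)}
```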

I also pointed out that these problems can be solved pretty easily. For you, it's important to burn into the outer shell of your brain: L2 norm, closed-form solution, period. L infinity norm, convex optimization problem. L1 norm, convex optimization problem with these curly inequality signs. You remember from yesterday: the curly sign means the inequality has to hold component-wise. Each component of the difference vector has to be larger than the corresponding component of minus r and smaller than the corresponding component of plus r. That's what we motivated yesterday.
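Written out, the three fitting problems summarized above look like this (standard formulations with design matrix $A$, data $b$, and residual bound $r$; my notation may differ from the slides):

```latex
% L2: closed-form least-squares solution
\hat{x}_2 = (A^\top A)^{-1} A^\top b
% L-infinity: linear program with a single scalar bound r
\min_{x,\,r}\ r \quad \text{s.t.}\quad -r\mathbf{1} \preccurlyeq Ax - b \preccurlyeq r\mathbf{1}
% L1: linear program with one bound per residual component
\min_{x,\,r}\ \mathbf{1}^\top r \quad \text{s.t.}\quad -r \preccurlyeq Ax - b \preccurlyeq r
```

Here $\preccurlyeq$ is the curly inequality sign: it holds component-wise.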

Just to remember: it's not that difficult to solve these optimization problems. We can use different norms; the L2 norm gives a closed-form solution, which is why many engineers just use L2 norm least-squares approaches. But in many cases we can do better in terms of robustness by simply moving to a different norm.
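As a small illustration of how easy these fits are in practice, here is a sketch in Python (my own toy example, assuming NumPy and SciPy are available; the data and variable names are made up): the L2 fit via the normal equations and the L1 fit via the linear program above.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: points near the line y = t, with one gross outlier at the end.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.0, 2.1, 2.9, 10.0])

A = np.column_stack([t, np.ones_like(t)])  # design matrix for y = m*t + c
n, p = A.shape

# L2: closed-form least-squares solution via the normal equations.
x_l2 = np.linalg.solve(A.T @ A, A.T @ y)

# L1: minimize 1^T r  subject to  -r <= A x - y <= r, written as an LP.
# Stacked variable z = [x (p entries), r (n entries)].
c_obj = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[A, -np.eye(n)],      #  A x - r <= y
                 [-A, -np.eye(n)]])    # -A x - r <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * p + [(0, None)] * n)
x_l1 = res.x[:p]

print("L2 fit (m, c):", x_l2)  # pulled toward the outlier
print("L1 fit (m, c):", x_l1)  # stays close to slope 1, intercept 0
```

With the outlier in the data, the L1 estimate should stay close to the true line while the L2 estimate gets pulled toward the outlier, which is exactly the robustness point made above.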

Then we have seen different combinations. Welcome, Christian, you are my candidate. Have a seat, please.

Accessible via: Open access
Duration: 00:44:52 min
Recording date: 2009-05-19
Uploaded on: 2017-07-05 12:43:37
Language: en-US
Tags: Analyse Linear PA Regression Norms Dependent Penalty Functions Rosenblatts Perceptron